
Can We Build AI That Does Not Harm Queer People?

Communications of the ACM

AI safety is a contentious topic. While some prominent figures in the AI community have argued that destructive artificial general intelligence is on the horizon, others have derided their warnings as a marketing stunt to sell large language models (LLMs). "If the call for 'AI safety' is couched in terms of protecting humanity from rogue AIs, it very conveniently displaces accountability away from the corporations scaling harm in the name of profits," tweeted Emily Bender, a professor of computational linguistics at the University of Washington. Focusing on potential future harm from ever more powerful AI systems distracts from harm that is already happening today. Most of us do not set out to make software that is actively harmful.


Google defends scrapping AI pledges and DEI goals in all-staff meeting

The Guardian

In an all-staff meeting on Wednesday, Google executives gave details on how the tech giant will sunset its diversity initiatives and defended dropping its pledge against building artificial intelligence for weaponry and surveillance. Melonie Parker, Google's former head of diversity, said the company was doing away with its diversity and inclusion employee training programs and "updating" broader training programs that have "DEI content". It was the first time company executives had addressed the whole staff since Google announced it would no longer follow diversity hiring goals and took down its pledge not to build militarized AI. Chief legal officer Kent Walker said a lot had changed since Google first introduced its AI principles in 2018, which explicitly stated that Google would not build AI for harmful purposes. In response to a question about why the company removed its prohibitions against building AI for weapons and surveillance, he said it would be "good for society" for the company to be part of evolving geopolitical discussions.


The Download: digital twins, and where AI data really comes from

MIT Technology Review

Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, has a cardboard box filled with 3D-printed hearts. Each of them is modeled on the real heart of a person with heart failure, but Niederer is more interested in creating detailed replicas of people's hearts using computers. These "digital twins" are the same size and shape as the real thing. They work in the same way. But they exist only virtually.


This is where the data to build AI comes from

MIT Technology Review

Their findings, shared exclusively with MIT Technology Review, show a worrying trend: AI's data practices risk concentrating power overwhelmingly in the hands of a few dominant technology companies. In the early 2010s, data sets came from a variety of sources, says Shayne Longpre, a researcher at MIT who is part of the project. The data came not just from encyclopedias and the web, but also from sources such as parliamentary transcripts, earnings calls, and weather reports. Back then, AI data sets were specifically curated and collected from different sources to suit individual tasks, Longpre says. Then transformers, the architecture underpinning language models, were invented in 2017, and the AI sector started seeing performance improve as models and data sets grew bigger.


Microsoft will build AI into new laptops, firing shot at Apple

Washington Post - Technology News

Chief executive Satya Nadella said that adding computer chips tailored to run AI technology to the company's PCs and tablets will make AI tools and features run faster than routing the technology through an internet connection, as most chatbots do today. Loading AI into its computers and marketing them as the best way to access AI could give the company a new edge in convincing consumers to choose a PC instead of a computer made by Apple.


Adept Raises $350 Million To Build AI That Learns How To Use Software For You

#artificialintelligence

Traditionally, more complex software is harder to use. Nabeel Hyatt, general partner at Spark Capital, says that if Adept's team achieves its full potential, this may no longer be the case. Chatbots rule the day in AI for now, but soon, Adept cofounder David Luan predicts, AI won't just display unsettlingly human responses to typed queries; it will execute them. It will do what you would do with your computer, for you. Granted, such technology is still years away, but the speed of innovation in the AI space means we're talking about two to three years, according to Luan -- not decades.


Studying the brain to build AI that processes language as people do

#artificialintelligence

AI has made impressive strides in recent years, but it's still far from learning language as efficiently as humans. For instance, children learn that "orange" can refer to both a fruit and a color from just a few examples, but modern AI systems can't do this nearly as efficiently as people. This has led many researchers to wonder: Can studying the human brain help to build AI systems that can learn and reason like people do? Today, Meta AI is announcing a long-term research initiative to better understand how the human brain processes language. In collaboration with the neuroimaging center Neurospin (CEA) and INRIA, we're comparing how AI language models and the brain respond to the same spoken or written sentences.


Adept aims to build AI that can automate any software process – TechCrunch

#artificialintelligence

In 2016 at TechCrunch Disrupt New York, several of the original developers behind what became Siri unveiled Viv, an AI platform that promised to connect various third-party applications to perform just about any task. The pitch was tantalizing -- but never fully realized. Samsung later acquired Viv, folding a pared-down version of the tech into its Bixby voice assistant. Six years later, a new team claims to have cracked the code to a universal AI assistant -- or at least to have gotten a little bit closer. At a product lab called Adept that emerged from stealth today with $65 million in funding, they are -- in the founders' words -- "build[ing] general intelligence that enables humans and computers to work together creatively to solve problems."


Researchers Build AI That Builds AI

#artificialintelligence

A hypernetwork aims to find the best deep neural network architecture to solve a given task. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a "hypernetwork" that could speed up the training of neural networks. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. The work may also have deeper theoretical implications. The name outlines the approach: a hypernetwork is a neural network that operates on another network.
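The core mechanic can be sketched in a few lines of numpy. Everything below is an illustrative toy, not the actual model from Knyazev and colleagues: a "hypernetwork" takes an encoding of a target architecture and emits, in a single forward pass, a flat parameter vector for the whole target network.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_forward(x, weights, widths):
    """Run a simple MLP whose weights were produced by the hypernetwork."""
    h = x
    offset = 0
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        n = w_in * w_out
        W = weights[offset:offset + n].reshape(w_in, w_out)
        offset += n
        h = np.tanh(h @ W)
    return h

def hypernetwork(arch_encoding, n_params, hyper_W):
    """Toy hypernetwork: a single linear layer mapping an architecture
    encoding to a flat parameter vector for the target network."""
    return (arch_encoding @ hyper_W)[:n_params]

# Target architecture: a 4 -> 8 -> 2 MLP, encoded here simply as its layer widths.
widths = [4, 8, 2]
n_params = sum(a * b for a, b in zip(widths[:-1], widths[1:]))  # 4*8 + 8*2 = 48

arch_encoding = np.array(widths, dtype=float)
hyper_W = rng.normal(scale=0.1, size=(len(widths), n_params))

# One hypernetwork forward pass yields ALL target-network parameters at once.
params = hypernetwork(arch_encoding, n_params, hyper_W)
y = target_forward(rng.normal(size=(5, 4)), params, widths)
print(y.shape)  # (5, 2): the target network runs without any training of its own
```

In the real system, the hypernetwork is itself trained across many architectures, so the predicted parameters already perform reasonably on the task; here its weights are random purely to show the data flow.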


Researchers Build AI That Builds AI

#artificialintelligence

Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them. Today's neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn't easy.
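The "optimization" the passage refers to is typically gradient descent: compute how the loss changes with respect to each parameter, then nudge every parameter a small step in the direction that lowers the loss. A minimal one-parameter sketch (a toy illustration, not any particular network's training loop):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: recover the slope in y = 3x with a single parameter w.
x = rng.normal(size=100)
y = 3.0 * x

w = 0.0    # the one "parameter"; real networks have millions or billions
lr = 0.1   # learning rate: how far each nudge moves the parameter

for step in range(100):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d/dw of the mean squared error
    w -= lr * grad                      # step against the gradient

print(round(w, 3))  # converges near the ideal value, 3.0
```

Repeating this same loop over millions or billions of parameters, while keeping it stable and affordable, is what makes training large networks hard.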